Algorithmic States of Exception
Abstract
In this paper I argue that pervasive tracking and data-mining are leading to shifts in governmentality that can be characterised as algorithmic states of exception. I also argue that the apparatus that performs this change owes as much to everyday business models as it does to mass surveillance. I look at technical changes at the level of data structures, such as the move to NoSQL databases, and how this combines with data-mining and machine learning to accelerate the use of prediction as a form of governance. The consequent confusion between correlation and causation leads, I assert, to the creation of states of exception. I set out what I mean by states of exception using the ideas of Giorgio Agamben, focusing on the aspects most relevant to algorithmic regulation: force-of and topology. I argue that the effects of these states of exception escape legal constraints such as concepts of privacy. Having characterised this as a potentially totalising change and an erosion of civil liberties, I ask in what ways the states of exception might be opposed. I follow Agamben by drawing on Walter Benjamin's concept of pure means as a tactic that is itself outside the frame of law-producing or law-preserving activity. However, the urgent need to respond requires more than a philosophical stance, and I examine two examples of historical resistance that satisfy Benjamin's criteria. For each in turn I draw connections to contemporary cases of digital dissent that exhibit some of the same characteristics. I conclude that it is possible both theoretically and practically to resist the coming states of exception and I end by warning what is at stake if we do not.

Threat models and data models

Edward Snowden's revelations (Electronic Frontier Foundation, 2014) are shocking to many because they suggest that the internet has been set to spy on us. Rather than being platforms for the free exchange of knowledge, the leaked documents show the internet and the web to be covered in surveillance machines that do not discriminate between suspects and the general population. Yet as disturbing as this picture might be, it is at the same time a diversion. By pointing the finger at the NSA and GCHQ, the revelations divert attention from the mechanisms of online business. Tracking is at the heart of Silicon Valley's operations (Mozilla, 2014) and online business-as-usual depends on ferreting out as much information as possible about users; our actions are recorded, collated and sold as part of the large-scale circulation of segmented advertising profiles. It is advertising revenues that oil the wheels of Silicon Valley and the implicit social contract is that service users will accept or ignore the gathering of their information in return for well-engineered free services such as Gmail and Facebook. It is important to realise that this is as expansive as the activities of PRISM, Boundless Informant and other intelligence agency programmes. If we are taken aback by reports that GCHQ developed code to extract user information from the popular gaming app Angry Birds (ProPublica et al., 2014), we should remember that the games companies themselves are already collecting and sharing this information for marketing purposes. In this paper, I will consider the implications of this activity in terms of a threat and the way this threat is connected to big data.
When security professionals discuss risk with NGO activists or journalists who have a reasonable suspicion that they are under surveillance, they will often talk in terms of defining a threat model. In other words, rather than treating security as a blanket term, it is important to consider what specific information should be secret, who might want that information, what they might be able to do to get it and what might happen if they do (Bradshaw, 2014). The emerging potential for algorithmic states of exception outlined in this paper suggests that the business model and the threat model are becoming synonymous, by giving rise to interactions that interfere with our assumptions about privacy and liberty.

In addition, the flow of everyday data gathered by these companies has surged into a permanent tsunami, whose landward incursions have become known as big data. The diverse minutiae of our digital interactions on the web and in the world (through smartphones, travel passes and so on) are aggregated into this new object of study and exploitation. Industry tries to capture big data through definitions like 'volume, velocity and variety' (Gartner, 2011) so that it can be positioned as both El Dorado (McKinsey, 2011) and panacea (Hermanin and Atanasova, 2013), perhaps unconsciously recapitulating alchemical notions of the Philosophers' Gold. Critics counter with questions about the ability of big data's numbers to speak for themselves, their innate objectivity, their equivalence, and whether bigness introduces new problems of its own (boyd and Crawford, 2011). I will suggest that the problems and the threats are not driven by big data as such, any more than the drifting iceberg is the cause of the global warming that unloosed it. Instead we need to look at the nature of the material-political apparatus that connects data to decision-making and governance.

The ways in which society disciplines citizens through discourses of health, criminality, madness and security (Foucault, 1977) are given categorical foundations in the structures of data. Consider, for example, the category of 'troubled families' created by the Department for Communities and Local Government (Department for Communities and Local Government, 2014) to identify families as requiring specific forms of intervention from the agencies in contact with them. The 40,000 or so families whose 'lives have been turned around', by being assigned a single keyworker tasked with getting them into work and their children back to school on a payment-by-results model, would have been identified through operations on the data fields that make their existence legible to the government. In turn, various agencies and processes would have operated on those individuals as both effect and affect, as a created intensity of experiential state, in ways that would construct the subjectivity of membership of a so-called troubled family. These actions would, in turn, become new content for data fields and would form the substrate for future interventions. Thus, the proliferation of data does not simply hedge in the privacy of Enlightenment individuals but produces new subjectivities and forms of action. The data that enables this activity is produced by what Foucault called a dispositif: 'a heterogeneous ensemble consisting of discourses, institutions, architectural forms, regulatory decisions, laws, administrative measures, scientific statements, philosophical, moral and philanthropic propositions. Such are the elements of the apparatus.
The apparatus itself is the system of relations that can be established between these elements' (Foucault, 1980, quoted in Ruppert, 2012). This paper argues that the apparatus is undergoing a significant shift in its system of relations at several levels: in architectural forms (database structures), in administrative measures (algorithms), in regulation (algorithmic regulation) and in laws (states of exception). The moral and philosophical propositions will be considered at the end of the paper, where I discuss potential means of resisting these shifts.

At the bottom layer of this stack of changes is the architecture of database systems. For the last few decades the Relational Database Management System (RDBMS) has been a core part of any corporate or state apparatus. The relational database transcribes between informational content and action in the world. It stores data in flat tables of rows, each row containing the same set of fields. Each table represents an entity in the world (for example, a person) and the fields are the attributes of that entity (which for a person could be name, age, sexual orientation and so on). Each row in the table is an instance of that entity in the world (so one table consists of many people) and the relationships between tables model relationships in the world (for example, between the table of people and the table of families). Operations on the data are expressed in Structured Query Language (SQL), which enables specific questions to be asked in a computationally effective manner (Driscoll, 2012). It is a powerful and efficient way to manage information at scale and has, until now, been well suited to the needs of organisations. However, it can be a real challenge to restructure a relational database when new kinds of data come along or when there is too much data to store on a single server.

Under the pressure of social media and big data, new forms of database are emerging which drop the relational model and the use of SQL. Commonly called NoSQL databases, their relative fluidity feeds into the social consequences I am interested in understanding. The structure of a relational database is an architecture of assumptions, built on a fixed ontology of data and anticipating the queries that can be made through its arrangement of entities and relationships. It encapsulates a more-or-less fixed perspective on the world, and resists the re-inscription of the data necessary to answer a completely new and unanticipated set of questions. NoSQL dumps the neatly defined tables of relational databases in favour of keeping everything in 'schema-less' data storage (Couchbase, 2014). In NoSQL, all the varied data you have about an entity at that moment is wrapped up as a single document object; it does not matter if it duplicates information stored elsewhere, and the kinds of data stored can be changed as you go along. Not only does this allow data to be spread across many servers, it allows a more flexible approach to interrogating it. The data is not stored in neatly boundaried boxes but can easily be examined at different granularities: for example, rather than retrieving the profile photos of a certain set of users, you can use an algorithm to search by eye colour. These dynamic systems can handle unstructured, messy and unpredictable data and respond in real-time to new ways of acting on patterns in the data. Like a shoal of startled fish, the application of this heterogeneous data can sharply change direction at any moment.
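To make the contrast concrete, the following is a minimal sketch in Python that places a fixed-schema relational table alongside a schema-less document. It uses the standard sqlite3 module for the relational side; the table names, field names and values are hypothetical illustrations rather than being drawn from any system discussed in this paper.

```python
import sqlite3

# Relational side: the schema is fixed in advance. Every row must fit the same
# set of fields, and new kinds of data require restructuring the table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT, age INTEGER, family_id INTEGER)"
)
conn.execute(
    "INSERT INTO people (name, age, family_id) VALUES (?, ?, ?)", ("A. Person", 34, 7)
)
# Queries are limited to the entities and relationships the schema anticipated.
rows = conn.execute(
    "SELECT name, age FROM people WHERE family_id = ?", (7,)
).fetchall()

# 'NoSQL' side: a schema-less document wraps up whatever is known about the
# entity at this moment, duplicating data freely and gaining new fields over time.
person_doc = {
    "name": "A. Person",
    "age": 34,
    "family": {"id": 7, "flags": ["troubled"]},  # nested, denormalised data
    "profile_photo": "photo_183.jpg",
    "derived": {"eye_colour": "brown"},          # added later by an analysis step
}

# An unanticipated question can be asked of a pile of documents without
# restructuring anything first, e.g. selecting people by a derived attribute.
documents = [person_doc]
brown_eyed = [
    d["name"] for d in documents
    if d.get("derived", {}).get("eye_colour") == "brown"
]
print(rows, brown_eyed)
```

The point of the sketch is simply that the relational query is bounded by the schema decided in advance, while the document store leaves the field open for questions invented after the data has been gathered.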
Throwing away the need to plan a database structure beforehand, or to think through the use and articulation of the data, leaves a free field for the projection of the imagination. In the next section, I describe how this accelerates the established trend of data-mining and prediction and feeds new ideas about possibilities for governance.

Algorithmic preemption

Data is transformed into propensities through algorithms, in particular through the forms of algorithmic processing known as data-mining and machine learning. Data-mining looks for patterns in the data, such as associations between variables and clusters, while machine learning enables computers to get better at recognising these patterns in future data (Hastie, 2003). Hence it becomes possible to make predictions based on inferences from the data. In the pioneering days of data-mining the interest was in the future purchasing decisions of supermarket customers. But the potential for empirical predictions is also attractive to social structures concerned with risk management, whether those risks are related to car insurance or the likelihood of a terrorist attack. For some, the massive rise in the means of finding correlations is something to be celebrated, enabling decisions about probable disease outbreaks or risks of building fires to be based on patterns in the data (Mayer-Schonberger and Cukier, 2013). However, a probabilistic algorithm will inevitably produce some false positives, where it essentially makes wrong guesses. Moreover, the 'reasoning' behind the identification of risk by an algorithm is an enfolded set of statistical patterns and may be obscure to humans, even when all the data is accessible. As a result, 'data mining might point to individuals and events, indicating elevated risk, without telling us why they were selected' (Zarsky, 2002). Ironically, the predictive turn introduces new risks because of the glossed-over difference between correlation and causation. Below, I show how this is amplified as decisions based on correlations move into the social domain.

The increasing use of prediction is colliding with our assumptions about political and judicial fairness through preemptive predictions: forms of prediction which are 'intentionally used to diminish a person's range of future options' (Earle, 2013). A good illustration of preemptive prediction is the no-fly list of people who are not allowed to board an aircraft in the USA. The list is compiled and maintained by the United States government's Terrorist Screening Center. People are usually unaware that they are on the list until they try to board a plane, and face legal obfuscation when they try to question the process by which they were placed on it (Identity Project, 2013). The only way to tell if you have been taken off the list is to try to get on a flight again and see what happens. The principle of fair and equal treatment for all under the law relies on both privacy and due process, but the alleged predictive powers of big data mining are on course to clash with the presumption of innocence. In Chicago, an algorithmic analysis predicted a 'heat list' of 420 individuals likely to be involved in a shooting, using risk factors like previous arrests, drug offences, known associates and their arrest records. Those named received personal warning visits from a police commander, leading at least one person to worry that the attention would mis-identify him to his neighbours as a snitch (Gorner, 2013).
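To illustrate the general point about false positives and obscure reasoning, rather than any of the specific systems described above, here is a minimal sketch of a correlation-based classifier. It assumes scikit-learn and NumPy are available, and the data and features are entirely synthetic stand-ins.

```python
# A toy 'risk prediction' pipeline: a classifier picks up correlations in
# historical data and flags individuals, some of whom are false positives.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Two features that merely correlate with the outcome in past data
# (placeholders for things like prior agency contacts or network size).
X = rng.normal(size=(n, 2))
# The outcome depends only weakly on the features plus a lot of noise,
# so the correlation is real but far from deterministic.
y = (0.8 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(scale=1.5, size=n) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Flag everyone whose predicted probability of the outcome exceeds a threshold.
flagged = model.predict_proba(X_test)[:, 1] > 0.5
false_positives = np.sum(flagged & (y_test == 0))
print(f"flagged: {flagged.sum()}, of whom false positives: {false_positives}")

# The model offers coefficients, not reasons: each flag is the product of a
# statistical pattern, and nothing here distinguishes correlation from cause.
print("coefficients:", model.coef_)
```

Even in this toy setting, some of those flagged have done nothing that the model's target variable records; the only 'explanation' available is a set of weights.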
When defending themselves against the charge that the heat list discriminated against the black community, Chicago police officials referred back to the mathematical nature of the analysis. Thus preemptive measures are applied without judicial standards of evidence, and police are sometimes prepared to act on the basis of an algorithm while asserting that they do not understand the reasoning process it has carried out.

While these cases may seem like outliers, the widespread adoption of algorithmic regulation may embed the same process at the core of regulatory action. The concept of algorithmic regulation is being promoted as a mechanism of social governance. One of the leading proponents is Tim O'Reilly, previously credited as a spokesman for Web 2.0 and its strategy of basing online services on user-generated data. O'Reilly and others use the term algorithmic regulation to describe this computational approach to government. They argue that the dynamic and statistical feedback loops used by corporations like Google and Facebook to police their systems against malware and spam can be used by government agencies to identify and modify social problems. These processes are already at play in the private sector; if you agree to a black box recorder in your car that tracks your driving behaviour, you will be offered a hefty discount on your car insurance (Confused.com, 2014). For policy makers, this promises a seamless upscaling of Thaler and Sunstein's theory of the 'Nudge', where small changes to the so-called choice architecture of everyday life alter people's behaviour in a predictable and desirable way (Thaler and Sunstein, 2008). The resources available to governments have been thinned by crisis-driven cuts and outsourcing, but big data brings a wealth of information. The skills of commercial data-mining and machine learning are ready to probe us for proclivities of which we may or may not be aware. Algorithmic regulation seems to offer an apparatus with traction on obesity, public health and energy use through real-time interventions. But, as we have seen, this is made possible by a stack of social technologies with a tendency to escape due process through preemption and to justify actions based on correlation rather than causation. How do we understand the implications of pervasive yet opaque mechanisms where correlation becomes a basis for correction or coercion? I argue that Giorgio Agamben's ideas about the State of Exception provide a useful lens.
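Before turning to that lens, it is worth making the feedback loop concrete. The sketch below is purely illustrative: the telemetry fields, weights and thresholds are hypothetical, and no actual insurer's scoring model is being described. It simply shows how behavioural data can be collapsed into a score that preemptively alters the conditions a person is offered.

```python
# Illustrative feedback loop: behavioural telemetry is scored, and the score
# feeds directly back into the terms offered to the person being measured.
from dataclasses import dataclass

@dataclass
class DrivingTelemetry:
    harsh_braking_per_100km: float
    night_driving_fraction: float
    average_speeding_margin_kmh: float

def risk_score(t: DrivingTelemetry) -> float:
    """Collapse behaviour into a single number; the weights encode a policy."""
    return (0.5 * t.harsh_braking_per_100km
            + 2.0 * t.night_driving_fraction
            + 0.3 * t.average_speeding_margin_kmh)

def adjust_premium(base_premium: float, score: float) -> float:
    """Preemptively reward or penalise behaviour inferred from past data."""
    if score < 2.0:
        return base_premium * 0.7   # discount for the compliant profile
    if score > 6.0:
        return base_premium * 1.4   # surcharge, applied before any accident occurs
    return base_premium

telemetry = DrivingTelemetry(harsh_braking_per_100km=1.2,
                             night_driving_fraction=0.05,
                             average_speeding_margin_kmh=2.0)
print(adjust_premium(480.0, risk_score(telemetry)))
```

The regulatory action here is automatic and continuous: no hearing, no explanation beyond the weights, and the consequence arrives before any harm has occurred.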